Cybersecurity expert urges more accountability and transparency in the use of AI
DEF CON founder Jeff Moss noted that more accountability and transparency would help get the best out of AI.
ST PHOTO: JASON QUAH
- Jeff Moss advocates for increased AI accountability and transparency to mitigate risks, ensuring users are not harmed by its vulnerabilities or exploitation by criminals.
- Moss raises ethical concerns about AI's use in warfare, citing the "Where's Daddy?" programme, and warns AI agents could become political without regulation.
- To boost cybersecurity, Moss proposes legal "safe harbour" for "white hat" hackers, allowing experts to research critical system vulnerabilities without fear of litigation.
SINGAPORE – When a consumer buys a lock, the seller will highlight only its qualities, as it is not in his interest to talk to the customer about the product’s limitations.
But when experts reveal the lock’s vulnerabilities, the consumer will have a different opinion of its worth.
This concept also applies to artificial intelligence, said computer and internet security expert Jeff Moss, founder of hacking convention DEF CON.
The use of AI is spreading so fast that consumers may neglect to ask how secure it is, with developers extolling only their products’ virtues.
To Mr Moss, more accountability and transparency are needed to mitigate the risks associated with AI by ensuring the technology is not misused or exploited by criminals.
He raised his concerns during an interview with The Straits Times on April 29 at the Sands Expo and Convention Centre, where DEF CON is being held in Singapore for the first time.
It is running alongside the Milipol TechX Summit (MTX) 2026 from April 28 to 30.
Mr Moss, who has held several prominent cybersecurity roles and was part of the technical consulting team of the hit techno-thriller TV series Mr Robot, said it is vital to discuss accountability in AI.
When an issue arises, there is a question of whether the blame lies with the developer, the company that hired the developer, or the user of the technology, he said.
Apart from the unpredictability arising from a lack of accountability, a lack of transparency in how the technology works or is developed could also lead to opportunities for criminals or nation states to misuse the technology, Mr Moss added.
“When I’m talking to policymakers, I’m always encouraging them to do things that increase accountability and transparency because I think people will generally make better decisions if they have more information,” he said.
In a talk at the MTX, Mr Moss noted that AI has gone from being a novelty to something that can generate value as an agent with the capacity to autonomously perform tasks.
This gives the user control by giving the AI agent parameters and boundaries without needing to be an infrastructure expert.
For example, one could deploy the agent to find the best price for an airline ticket without having to rely on a ticket pricing site, he said.
In an era where almost all technology has a political element, Mr Moss said he would not be surprised if AI agents also become very political very quickly.
And in the absence of any regulation or guidance, developers and users could use technology to do whatever they want with very little consequence, he added.
He noted that more accountability and transparency would help get the best out of AI, as these safeguards would allow policymakers to determine the trade-offs and society to decide what risks it is prepared to take.
“But when everything is opaque or shielded behind an NDA (non-disclosure agreement), or you can’t research it, or can’t understand how an AI agent works because it’s proprietary – the more we prevent transparency, the more problems we’ll have,” he said.
Mr Moss, who served on the cybersecurity advisory committee of the US Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency from 2021 to 2025, also highlighted some of the ethical issues in the use of AI in warfare.
Citing the Israel-Gaza war, he said the Israeli military used an AI program called Where’s Daddy? to track enemies to their homes before targeting them there. This meant their family members could be in harm’s way as well.
He said that while AI systems can help identify targets, they do not take into account traditional ethical considerations.
“It provides incredible speed and awareness, (but) if we can’t solve that problem, all these other problems are just inconsequential to the moral fabric of how you conduct yourself in combat,” he added.
Targeting telcos
When asked about cyberattacks between countries, Mr Moss cited the telco space as being a “jewel” for political interference via cyberattacks by giving attackers access to technical and social networks, including mobile phones.
“This means the attackers can see every politician and who they’re having dinner with. They can use that to determine if their targets are seeing someone else’s wife, meeting the political opposition, (or) spending a lot of time talking to certain companies.
“So much information can be leaked – imagine the potential for blackmail,” he said.
In February, all four major telcos in Singapore were attacked by state-sponsored cyberespionage group UNC3886.
But no sensitive data was accessed or stolen, and critical systems such as the 5G core were not compromised.
To boost cybersecurity, Mr Moss said one way is to give more legal space for good hackers – or “white hats”, as they are known in the cybersphere – to counter bad actors.
Following the 2016 US elections, where election fraud was an issue, a friend suggested to Mr Moss that the voting machines used were insecure.
Mr Moss was surprised that he could buy the machines on eBay. When he dismantled them, he found they had been seriously compromised.
He and his team were allowed to research the devices because of a “safe harbour” exception made to the US’ Digital Millennium Copyright Act that made it legal to hack and research items like medical devices and election technology.
Mr Moss said such laws could allow experts to inspect and explore vulnerabilities without fear of litigation.